The history of invasive and interventional cardiology is complex, with multiple groups working independently on similar technologies. Invasive and interventional cardiology is currently closely associated with cardiologists (physicians who treat diseases of the heart), though much of its early development, research, and procedures were performed by diagnostic and interventional radiologists.
The history of invasive cardiology begins with the development of cardiac catheterization in 1711, when Stephen Hales placed catheters into the right and left ventricles of a living horse.[1] Variations on the technique were performed over the subsequent century, with formal study of cardiac physiology carried out by Claude Bernard in the 1840s.[2]
The technique of angiography was first developed in 1927 by the Portuguese physician Egas Moniz, who used contrasted x-ray imaging to diagnose nervous diseases such as tumors and arteriovenous malformations. He is recognized as one of the pioneers in this field. Werner Forssmann, in 1929, created an incision in one of his left antecubital veins and inserted a catheter into his venous system. He then guided the catheter by fluoroscopy into his right atrium. Subsequently, he walked up a flight of stairs to the radiology department and documented the procedure by having a chest roentgenogram performed.[3] Over the next year, catheters were placed in a similar manner into the right ventricle, and measurements of pressure and cardiac output (using the Fick principle) were performed.[4]
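For context, the Fick principle estimates cardiac output from oxygen uptake and the arteriovenous oxygen content difference. A minimal illustration is shown below; the numbers are typical resting values chosen for clarity, not figures from the original studies:

$$\text{CO} = \frac{\dot{V}\mathrm{O}_2}{C_a\mathrm{O}_2 - C_v\mathrm{O}_2} \approx \frac{250\ \text{mL O}_2/\text{min}}{200\ \text{mL O}_2/\text{L} - 150\ \text{mL O}_2/\text{L}} = 5\ \text{L/min}$$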
In the early 1940s, André Cournand, in collaboration with Dickinson Richards, performed more systematic measurements of the hemodynamics of the heart.[5] For their work in the discovery of cardiac catheterization and hemodynamic measurements, Cournand, Forssmann, and Richards shared the Nobel Prize in Physiology or Medicine in 1956.
In 1958, the interventional radiologist Charles Dotter began working on methods to visualize the coronary anatomy via sequential radiographic films. He invented a method known as occlusive aortography, which he tested in an animal model. Occlusive aortography involved transiently occluding the aorta, injecting a small amount of radiographic contrast agent into the aortic root, and taking serial x-rays to visualize the coronary arteries.[6] This method produced impressive images of the coronary anatomy. Dotter later reported that all the animals used in the procedure survived.
Later that same year, while performing an aortic root aortography, Mason Sones, a pediatric cardiologist at the Cleveland Clinic, noted that the catheter had accidentally entered the patient's right coronary artery. Before the catheter could be removed, 30 cc of contrast agent had been injected.[7] Although the patient went into ventricular fibrillation, Sones promptly performed a precordial thump, terminating the dangerous arrhythmia and restoring sinus rhythm. This became the world's first selective coronary arteriogram. Until that time, it was believed that even a small amount of contrast agent within a coronary artery would be fatal.
Until the 1950s, placing a catheter into either the arterial or venous system involved a "cut down" procedure, in which the soft tissues were dissected out of the way until the artery or vein was directly visualized and subsequently punctured by a catheter; this was known as the Sones technique. The percutaneous approach that is widely used today was developed by Sven-Ivar Seldinger in 1953.[8][9] This method was used initially for the visualization of the peripheral arteries. Percutaneous access of the artery or vein is still commonly known as the Seldinger technique. The use of the Seldinger technique for visualizing the coronary arteries was described by Ricketts and Abrams in 1962 and Judkins in 1967.[10][11]
By the late 1960s, Melvin Judkins had begun work on creating catheters that were specially shaped to reach the coronary arteries to perform selective coronary angiography. His initial work involved shaping stiff wires and comparing those shapes to radiographs of the ascending aorta to determine if the shape appeared promising. Then he would place the stiff wire inside a flexible catheter and use a heat-fixation method to permanently shape the catheter. In the first use of these catheters in humans, each catheter was specifically shaped to match the size and shape of the aorta of the subject. His work was documented in 1967, and by 1968 the Judkins catheters were manufactured in a limited number of fixed tip shapes.[12] Catheters in these shapes carry his name and are still used to this day for selective coronary angiography.
The use of a balloon-tipped catheter for the treatment of atherosclerotic vascular disease was first described by Charles Dotter and Melvin Judkins in 1964, when they used it to treat a case of atherosclerotic disease in the superficial femoral artery of the left leg.[13][14] Building on their work and his own research involving balloon-tipped catheters, Andreas Gruentzig performed the first successful percutaneous transluminal coronary angioplasty (known as PTCA or percutaneous coronary intervention (PCI)) on a human on September 16, 1977 at University Hospital, Zurich.[15] The results of the procedure were presented at the American Heart Association meeting two months later to a stunned audience of cardiologists. In the subsequent three years, Gruentzig performed coronary angioplasties in 169 patients in Zurich, while teaching the practice of coronary angioplasty to a field of budding interventional cardiologists. Notably, ten years later, nearly 90 percent of these individuals were still alive.[15] By the mid 1980s, over 300,000 PTCAs were being performed each year, equaling the number of bypass surgeries performed for coronary artery disease.[16]
Soon after Andreas Gruentzig began performing percutaneous interventions on individuals with stable coronary artery disease, multiple groups described the use of catheter-delivered streptokinase for the treatment of acute myocardial infarction (heart attack).[17][18]
In the early years of coronary angioplasty, there were a number of serious complications. Abrupt vessel closure after balloon angioplasty occurred in approximately 1% of cases, often necessitating emergency bypass surgery. Vessel dissection was a frequent issue as a result of improper sizing of the balloon relative to the arterial diameter. Late restenosis occurred in as many as 30% of individuals who underwent PTCA, often causing recurrence of symptoms necessitating repeat procedures.
From the time of the initial percutaneous balloon angioplasty, it was theorized that devices could be placed inside the arteries as scaffolds to keep them open after a successful balloon angioplasty.[13] This did not become a reality in the cardiac realm until the first intracoronary stents were successfully deployed in coronary arteries in 1986.[19][20] The first stents used were self-expanding Wallstents. The use of intracoronary stents was quickly identified as a method to treat some complications due to PTCA,[19] and their use can decrease the incidence of emergency bypass surgery for acute complications post balloon angioplasty.[21]
It was quickly realized that restenosis rates were significantly lower in individuals who received an intracoronary stent than in those who underwent balloon angioplasty alone.[22] A damper on the immediate adoption of intracoronary stents was subacute thrombosis, which occurred in about 3.7 percent of stented patients, higher than the rate seen after balloon angioplasty.[20] Post-procedure bleeding was also an issue, due to the intense combination of anticoagulation and anti-platelet agents used to prevent stent thrombosis.
Stent technology improved rapidly, and in 1989 the Palmaz-Schatz balloon-expandable intracoronary stent was developed.[23][24] Initial results with the Palmaz-Schatz stents were excellent when compared to balloon angioplasty, with a significantly lower incidence of abrupt closure and peri-procedure heart attack.[25] Late restenosis rates with Palmaz-Schatz stents were also significantly improved when compared with balloon angioplasty.[26][27] However, mortality rates were unchanged compared to balloon angioplasty.[28] Although the rates of subacute thrombosis and bleeding complications associated with stent placement remained high, by 1999 nearly 85% of all PCI procedures included intracoronary stenting.[29]
In recognition of the focused training required by cardiologists to perform percutaneous coronary interventions, and of the rapid progression of the field, specialized fellowship training in Interventional Cardiology was instituted in 1999.[16]
Through the 1990s and beyond, various incremental improvements were made in balloon and stent technology, along with newer devices, some of which are still in use today while many more have fallen into disuse. As important as balloon and stent technology had been, it was becoming obvious that the anticoagulation and anti-platelet regimen that individuals received post-intervention was at least as important. Trials in the late 1990s revealed that anticoagulation with warfarin was not required post balloon angioplasty or stent implantation, while intense anti-platelet regimens and changes in procedural technique (most importantly, making sure that the stent was well apposed to the walls of the coronary artery) improved short term and long term outcomes.[30] Many different antiplatelet regimens were evaluated in the 1990s and at the turn of the 21st century, and the optimal regimen for an individual patient remains a matter of debate.
With the high use of intracoronary stents during PCI procedures, the focus of treatment changed from procedural success to prevention of recurrence of disease in the treated area (in-stent restenosis). By the late 1990s it was generally acknowledged among cardiologists that the incidence of in-stent restenosis was between 15 and 30%, and possibly higher in certain subgroups of individuals.[29] Stent manufacturers experimented with (and continue to experiment with) a number of chemical agents to prevent the neointimal hyperplasia that is the cause of in-stent restenosis.
One of the first products of the new focus on preventing late events (such as in-stent restenosis and late thrombosis) was the heparin-coated Palmaz-Schatz stent.[31] These coated stents were found to have a lower incidence of subacute thrombosis than bare metal stents.[32]
At approximately the same time, Cordis (a division of Johnson & Johnson) was developing the Cypher stent, a stent that would release sirolimus (an immunosuppressive, antiproliferative agent) over time. The first study of this stent in humans revealed a remarkable absence of restenosis (zero percent) at six months.[33] This led to the approval of the stent for use in Europe in April 2002.[34] Further trials with the Cypher stent revealed that restenosis did occur in some individuals with high risk features (such as long areas of stenosis or a history of diabetes mellitus), but that the restenosis rate was significantly lower than with bare metal stents (3.2 percent compared to 35.4 percent).[35] About a year after approval in Europe, the United States FDA approved the Cypher stent as the first drug-eluting stent for use in the general population in the United States.[36]
With the significantly lower restenosis rates of drug-eluting stents compared to bare metal stents, the interventional cardiology community began using these stents as soon as they became available. Cordis, the manufacturer of the Cypher drug-eluting stent, was not able to keep up with demand when the stents first entered the market. This led to rationing of Cypher stents, which were reserved for difficult anatomy and high-risk individuals. At the time there was also public concern that drug-eluting stents would not be used in individuals who could not afford them (as they cost significantly more than the bare metal stents of the era).
Concurrent with the development of the Cypher stent, Boston Scientific started development of the Taxus stent. The Taxus stent was based on the Express2 metal stent, which had been in general use for a number of years,[37] with a copolymer coating of paclitaxel, an agent that inhibits cell replication. As with the Cypher stent before it, the first trials of the Taxus stent revealed no evidence of in-stent restenosis at six months after the procedure,[38] while later studies showed some restenosis, at a rate much lower than with the bare metal counterpart.[39] Based on these trials, the Taxus stent was approved for use in Europe in 2003. With further study,[40] the FDA approved the use of the Taxus stent in the United States in March 2004.[41]
By the end of 2004, drug eluting stents were used in nearly 80 percent of all percutaneous coronary interventions.[42]
Trials of heparin-coated stents could not match the significant decrease in restenosis rates seen with the Cypher and Taxus stents. As the supply of drug-eluting stents increased, the use of heparin-coated stents waned.
The field of interventional cardiology has had a number of controversies since its inception. In part this is because of the rise of the randomized controlled trial as the benchmark of a successful procedure, compounded by the rapid changes in the field of interventional cardiology. Procedures were often adopted soon after they were described in the literature or at conferences, while the trial data determining whether a procedure improved outcomes lagged behind by years, owing to the strict protocols and long patient follow-up needed to test it. By the time the trials were published, they would be considered out of date, as they did not reflect current practice in the field. This led to the adoption of a number of procedures and devices in the interventional realm that later fell out of practice when formal trials found that they did not improve outcomes.
Another source of controversy in the field of interventional cardiology is the overlapping roles of PCI and coronary artery bypass surgery (CABG) for individuals with coronary artery disease. This area has been studied in a number of trials since the early 1990s.[43][44][45] Unfortunately, due to the rapid changes in technique in both bypass surgery and PCI, combined with a better understanding of the role of intense pharmacologic therapy in individuals with coronary artery disease, questions remain about the best form of therapy in many subgroups of patients. Multiple ongoing studies hope to tease out which individuals do better with PCI and which do better with CABG,[46] but in general each case is individualized to the patient and the relative comfort level of the interventional cardiologist and the cardiothoracic surgeon.
In the vast majority of cases, percutaneous coronary interventions do not improve mortality when compared to optimal medical therapy in the stable individual. This is, of course, not true in the unstable individual, such as in the setting after a myocardial infarction (heart attack). Even in the stable individuals, however, there are a number of subsets in which there is a mortality benefit that is attributed to PCI.
Subsequently, at the 2007 meeting of the American College of Cardiology (ACC), data from the COURAGE trial were presented, suggesting that the combination of PCI and intensive (optimal) medical therapy did not reduce the incidence of death, heart attack, or stroke compared to intensive medical therapy alone.[47] Critics of the trial state that it did not take into account the improvement in symptoms attributed to PCI, that the data presented came from an intention-to-treat analysis, and that there was a possibly significant crossover from the medical therapy arm to the PCI arm of the study. The optimal medical therapy used in the COURAGE trial was also significantly more aggressive than the current ACC guidelines and is not commonly seen in the general cardiology clinic. As with any large clinical trial, the therapies available had changed from when the trial was designed to when the results were presented. In particular, drug-eluting stents, while commonly used in practice at the time the results of the trial were presented, were used in less than 5 percent of individuals in the trial.
When the results of the first trials of drug-eluting stents were published, there was a general feeling in the interventional cardiology community that these devices would be part of the perfect revascularization regimen for coronary artery disease. With the very low restenosis rates of the RAVEL[33] and SIRIUS[35] trials, interventions were performed on more complex blockages in the coronary arteries, under the assumption that the results in real life would mimic the results in the trials. The antiplatelet regimens advised for drug-eluting stents were based on the early trials of these stents: a combination of aspirin and clopidogrel for 3 months when Cypher stents were used,[35] and 9 months when Taxus stents were used,[48] followed by aspirin indefinitely.
Soon, case reports started being published regarding late stent thrombosis.[49] At the 2006 annual meeting of the American College of Cardiology, preliminary results of the BASKET-LATE trial were presented, which showed a slight increase in late thrombosis associated with drug eluting stents over bare metal stents.[50] However, this increase was not statistically significant, and further data would have to be collected. Further data published over the following year had conflicting results,[51] and it was unclear whether stent thrombosis was truly higher when compared to bare metal stents. During this time of uncertainty, many cardiologists started extending the dual antiplatelet regimen of aspirin and clopidogrel in these individuals, as some data suggested that it may prevent late thrombosis.[52]
The FDA held an expert panel in December 2006 to review the data presented by Cordis and Boston Scientific and to determine whether drug-eluting stents should be considered less safe than bare metal stents.[53] It became evident at the meeting that, across the published data, there were varied definitions of late thrombosis and key differences in the types of lesions in different studies, hampering analysis of the data.[42] It was also noted that with the advent of drug-eluting stents, interventional cardiologists began performing procedures on more complex lesions, using the drug-eluting stents in "off label" coronary artery lesions that would otherwise have gone untreated or been referred for bypass surgery.[42] The FDA advisory board reiterated the ACC guidelines that clopidogrel should be continued for 12 months after drug-eluting stent placement in individuals who are at low risk for bleeding.[54][55]